Scalable Stochastic Gradient Riemannian Langevin Dynamics in Non-Diagonal Metrics

Yu, Hanlin, Hartmann, Marcelo, Williams, Bernardo, Klami, Arto

arXiv.org Artificial Intelligence

Stochastic-gradient sampling methods are often used to perform Bayesian inference on neural networks. It has been observed that methods incorporating notions of differential geometry tend to perform better, with the Riemannian metric improving posterior exploration by accounting for local curvature. However, existing methods often resort to simple diagonal metrics to remain computationally efficient, which sacrifices some of these gains. We propose two non-diagonal metrics that can be used in stochastic-gradient samplers to improve convergence and exploration while incurring only a minor computational overhead over diagonal metrics. We show that for fully connected neural networks (NNs) with sparsity-inducing priors and convolutional NNs with correlated priors, using these metrics provides improvements. For other model choices, the posterior is easy enough that even the simpler metrics suffice.
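At its core, such a sampler takes a preconditioned Langevin step. Below is a minimal sketch of that generic update, assuming a fixed (position-independent) non-diagonal metric M supplied through its Cholesky factor, so the curvature-correction term of full Riemannian Langevin dynamics vanishes. It illustrates the general technique on a toy Gaussian target; it is not the specific metrics proposed in the paper, and all names and constants are assumptions.

import numpy as np

rng = np.random.default_rng(0)

def sgld_step_nondiag(theta, grad_logpost, chol_M, eps):
    # One preconditioned Langevin step with a fixed non-diagonal metric
    # M = L @ L.T (L = chol_M):
    #   theta <- theta + (eps/2) * M^{-1} g + N(0, eps * M^{-1})
    # Because M does not depend on theta, the curvature-correction term
    # of Riemannian Langevin dynamics is zero.
    drift = 0.5 * eps * np.linalg.solve(chol_M.T,
                                        np.linalg.solve(chol_M, grad_logpost))
    # z = L^{-T} xi has covariance (L L^T)^{-1} = M^{-1}.
    noise = np.sqrt(eps) * np.linalg.solve(chol_M.T,
                                           rng.standard_normal(theta.shape))
    return theta + drift + noise

# Toy correlated-Gaussian posterior; full-batch gradient for simplicity
# (a minibatch gradient estimate would replace it in the stochastic setting).
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)
grad_logpost = lambda th: -Sigma_inv @ th

# Hypothetical non-diagonal metric: here simply the posterior precision.
L = np.linalg.cholesky(Sigma_inv)

theta = np.zeros(2)
samples = []
for _ in range(5000):
    theta = sgld_step_nondiag(theta, grad_logpost(theta), L, eps=0.05)
    samples.append(theta.copy())
print(np.cov(np.array(samples).T))  # should be close to Sigma

Passing the metric as a Cholesky factor keeps the per-step cost at two triangular solves plus one Gaussian draw, the kind of minor overhead beyond a diagonal metric that the abstract refers to.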


Stochastic Gradient Riemannian Langevin Dynamics on the Probability Simplex

Patterson, Sam, Teh, Yee Whye

Neural Information Processing Systems

In this paper we investigate the use of Langevin Monte Carlo methods on the probability simplex and propose a new method, stochastic gradient Riemannian Langevin dynamics, which is simple to implement and can be applied to large-scale data. We apply this method to latent Dirichlet allocation in an online minibatch setting, and demonstrate that it achieves substantial performance improvements over state-of-the-art online variational Bayesian methods.
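For concreteness, here is a minimal sketch of the expanded-mean SGRLD update the abstract describes, applied to a toy Dirichlet-multinomial model rather than full LDA; the model, step size, batch size, and variable names are illustrative assumptions, not the paper's experimental setup.

import numpy as np

rng = np.random.default_rng(1)

# Toy model: x_i ~ Categorical(pi), pi ~ Dirichlet(alpha). In the
# expanded-mean parameterization pi_k = theta_k / sum_j theta_j with
# theta_k ~ Gamma(alpha, 1), and with Riemannian metric
# G(theta) = diag(theta)^{-1}, one SGRLD step is
#   theta_k <- | theta_k + (eps/2) * (alpha - theta_k + n_k - pi_k * n)
#                + sqrt(eps * theta_k) * xi_k |,  xi_k ~ N(0, 1),
# where (n_k, n) are minibatch counts rescaled to the full data set and
# the absolute value mirrors the chain back into the positive orthant.
K, N = 5, 10000
pi_true = np.array([0.4, 0.25, 0.15, 0.12, 0.08])
data = rng.choice(K, size=N, p=pi_true)

alpha, eps, batch = 1.0, 1e-4, 100
theta = rng.gamma(alpha, 1.0, size=K)   # expanded-mean parameters

def sgrld_step(theta, minibatch):
    counts = np.bincount(minibatch, minlength=K)
    n_hat = counts * (N / len(minibatch))      # rescale to full-data scale
    pi = theta / theta.sum()
    drift = 0.5 * eps * (alpha - theta + n_hat - pi * n_hat.sum())
    noise = np.sqrt(eps * theta) * rng.standard_normal(K)
    return np.abs(theta + drift + noise)       # mirror at the boundary

burn, steps = 1000, 6000
pi_avg = np.zeros(K)
for t in range(steps):
    idx = rng.choice(N, size=batch, replace=False)
    theta = sgrld_step(theta, data[idx])
    if t >= burn:
        pi_avg += theta / theta.sum()
print(pi_avg / (steps - burn))  # approx. posterior mean of pi, near pi_true

Only minibatch counts enter each step, which is what makes the update applicable online to large-scale data, and the mirroring trick keeps the parameters valid without any projection onto the simplex.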